Scene Graph Generation


LinkNet: Relational Embedding for Scene Graph

Neural Information Processing Systems

Objects and their relationships are critical content for image understanding. A scene graph provides a structured description that captures these properties of an image. However, reasoning about the relationships between objects is very challenging, and only a few recent works have attempted to solve the problem of generating a scene graph from an image. In this paper, we present a novel method that improves scene graph generation by explicitly modeling the inter-dependency among all object instances. We design a simple and effective relational embedding module that enables our model to jointly represent connections among all related objects, rather than focusing on each object in isolation. Our method significantly benefits the two main parts of the scene graph generation task: object classification and relationship classification. Built on top of a basic Faster R-CNN, our model achieves state-of-the-art results on the Visual Genome benchmark.
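The core idea of the relational embedding module is that each object's representation is updated by attending to all other objects in the image instead of being classified in isolation. A minimal sketch of this pattern is below; the projection matrices are random stand-ins for learned parameters, and all names and shapes are illustrative rather than taken from the LinkNet implementation.

```python
import numpy as np

def relational_embedding(features, d_k=64, seed=0):
    """Illustrative relational embedding: one self-attention pass that
    lets every object feature attend to all others. In a trained model
    the projection matrices would be learned; here they are random."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    w_q = rng.normal(scale=d ** -0.5, size=(d, d_k))
    w_k = rng.normal(scale=d ** -0.5, size=(d, d_k))
    w_v = rng.normal(scale=d ** -0.5, size=(d, d))
    q, k, v = features @ w_q, features @ w_k, features @ w_v
    scores = q @ k.T / np.sqrt(d_k)             # pairwise relation scores
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over all objects
    return features + attn @ v                  # residual update

# five hypothetical objects with 128-d RoI features
feats = np.random.default_rng(1).normal(size=(5, 128))
out = relational_embedding(feats)
print(out.shape)  # (5, 128)
```

The residual form keeps each object's own appearance features while mixing in context from every related object, which is what lets both object and relationship classifiers benefit from the shared embedding.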



4D Panoptic Scene Graph Generation
Jingkang Yang

Neural Information Processing Systems

Traditional 3D scene graph methods may recognize the static elements of such a scene, for example identifying a booth situated on the ground. Real-world scenarios, however, demand a more advanced and dynamic form of perception.







Joint Modeling of Visual Objects and Relations for Scene Graph Generation (Supplementary Material)

Neural Information Processing Systems

Now we can derive exactly that q(G) = p̂(G|I). The definitions of the potential functions φ and ψ follow those of the JM-SGG model. Figure 1: scene graphs generated by the JM-SGG model. In these examples, the factor update is able to correct some wrong relation labels (e.g.


Joint Modeling of Visual Objects and Relations for Scene Graph Generation

Neural Information Processing Systems

An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first independently recognize each object and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on mean-field variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks prove the efficiency and efficacy of our proposed approach.
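The inference procedure described above, warm-starting from independent per-object predictions and then running a few rounds of mean-field relational reasoning, can be sketched as follows. This is a generic mean-field update for a fully connected CRF over object labels, not the JM-SGG implementation itself; the potential shapes and all names are illustrative assumptions.

```python
import numpy as np

def mean_field(unary, pairwise, iters=3):
    """Mean-field inference sketch for a fully connected CRF over labels.
    unary:    (n, L) log-potentials from independently predicting each
              object (the "warm initialization" described above).
    pairwise: (L, L) log-compatibility between labels of related objects.
    Returns per-object label marginals after a few rounds of reasoning."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    q = softmax(unary)                       # warm start: independent predictions
    for _ in range(iters):
        # message to object i: expected compatibility with all other objects
        msg = (q.sum(axis=0, keepdims=True) - q) @ pairwise
        q = softmax(unary + msg)             # coordinate-ascent update
    return q

rng = np.random.default_rng(0)
unary = rng.normal(size=(4, 3))              # 4 objects, 3 candidate labels
pairwise = rng.normal(size=(3, 3))
marginals = mean_field(unary, pairwise)
print(marginals.shape)  # (4, 3)
```

Each iteration is a closed-form coordinate-ascent step on the variational objective, so a handful of iterations after the warm start typically suffices, which is what makes this style of joint inference efficient in practice.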